
    Proof Complexity for Quantified Boolean Formulas

    Quantified Boolean formulas (QBFs) extend the propositional satisfiability problem by allowing variables to be universally as well as existentially quantified. Deciding whether a QBF is true or false is PSPACE-complete, and a wide range of mathematical and industrial problems can be expressed as QBFs. QBF proof complexity is the theoretical analysis of algorithmic techniques for solving QBFs. We make a detailed comparison of the proof systems Q-Res, QU-Res, and ∀Exp+Res, which extend propositional Resolution with different rules for reasoning about universally quantified variables. We give new simulation and separation results between these proof systems under two natural restrictions: when the proofs are tree-like, and when the QBFs have bounded quantifier complexity. We consider QRAT, a strong QBF proof system proposed as a universal proof-checking format. We show that, unless P = PSPACE, QRAT does not admit strategy extraction. This is proved by constructing a family of QBFs that have short QRAT proofs but whose strategies are hard to compute in general. We also explore why strategy extraction fails for QRAT, and present a restricted version of QRAT that does admit strategy extraction. Finally, we study two results from propositional proof complexity and their analogues in QBF proof complexity, showing in both cases how the additional complexity of QBF solving, compared to refuting propositional formulas, causes these results to fail in the QBF setting.
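    As a minimal illustrative sketch (not from the paper), the following Python snippet decides the truth of a small prenex QBF by recursively expanding each quantified variable into both truth values, the expansion idea underlying the ∀Exp+Res proof system; the encoding of prefixes and matrices here is an assumption made purely for illustration.

```python
# Toy QBF evaluator: expand each quantified variable into both truth values.
# Universal expansion of this kind is the idea behind the ∀Exp+Res system.

def qbf_true(prefix, matrix, assignment=None):
    """prefix: list of (quantifier, variable) pairs, outermost first.
    matrix: function mapping a dict of variable assignments to a bool."""
    assignment = dict(assignment or {})
    if not prefix:
        return matrix(assignment)
    (q, v), rest = prefix[0], prefix[1:]
    branches = (qbf_true(rest, matrix, {**assignment, v: b}) for b in (False, True))
    return all(branches) if q == 'forall' else any(branches)

# Quantifier order matters: ∀x ∃y. (x ↔ y) is true, but ∃y ∀x. (x ↔ y) is false.
phi = lambda a: a['x'] == a['y']
assert qbf_true([('forall', 'x'), ('exists', 'y')], phi)
assert not qbf_true([('exists', 'y'), ('forall', 'x')], phi)
```

    Each universal variable doubles the work, which mirrors why full expansion is exponential in general and why the relative power of expansion-based and resolution-based QBF proof systems is worth comparing.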

    Data Generation for Neural Programming by Example

    Programming by example is the problem of synthesizing a program from a small set of input/output pairs. Recent work applying machine learning methods to this task shows promise, but typically relies on generating synthetic examples for training. A particular challenge lies in generating meaningful sets of inputs and outputs that characterize a given program well and accurately demonstrate its behavior. When the examples used for testing are generated by the same method as the training data, the measured performance of a model may partly reflect this similarity. In this paper we introduce a novel approach that uses an SMT solver to synthesize inputs covering a diverse set of behaviors for a given program. We carry out a case study comparing this method to existing synthetic data generation procedures in the literature, and find that data generated using our approach improves both the discriminatory power of example sets and the ability of trained machine learning models to generalize to unfamiliar data.
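    As a minimal sketch of this idea (not the paper's implementation), one can use the Z3 SMT solver to synthesize one input per branch of a toy program, blocking previously chosen inputs so each example demonstrates a distinct behavior. The toy program, its branch conditions, and all names here are illustrative assumptions; the snippet requires the z3-solver Python package.

```python
# Sketch: SMT-based input generation covering distinct program behaviors.
from z3 import Int, Solver, sat

x = Int('x')
# Symbolic branch conditions of a toy program out = (x if x >= 0 else -x):
# one example input is synthesized per behavior (branch).
behaviors = [x >= 0, x < 0]

inputs = []
for branch in behaviors:
    s = Solver()
    s.add(branch)
    for seen in inputs:          # block inputs already chosen, so each new
        s.add(x != seen)         # example exhibits a not-yet-covered behavior
    if s.check() == sat:
        inputs.append(s.model()[x].as_long())

print(inputs)  # e.g. [0, -1]: one representative input per branch
```

    The blocking constraints are what push the example set toward diversity: without them a solver is free to return the same satisfying input repeatedly, which is precisely the kind of uninformative example set the paper aims to avoid.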